QA & Testing · Phase 05 of 05

Put the tests in the PR.
Let the pipeline produce proof.

The deck’s second major theme is enforcement. Generated tests only matter if they are committed with the feature, run automatically on pull requests, and stored as versioned artifacts that support both engineering review and GxP evidence review.


Tests belong in the same PR as the feature code. No test, no merge.

The workshop frames this as the core habit that turns AI-generated tests into a reliable delivery practice rather than an optional side task.

1. Generate the test file

Use Copilot Chat or Claude Code to create the unit or integration test alongside the feature implementation.

2. Commit code and tests together

The source file and the test file should move through review as one change set. That keeps intent and validation bound to the same record.

3. Open the PR

GitHub Actions detects the new tests and starts the run automatically. The deck’s point is zero manual orchestration for the tester.

4. Surface the summary on the PR

The agent can summarize which tests were added, their pass/fail status, and the requirement IDs they cover so reviewers see the result in context.

GitHub Actions is the automation engine that turns tests into merge gates.

The sample workflow in the deck is simple on purpose: run on pull requests, install dependencies, run the suite, upload artifacts, and comment back to the PR.

1. Trigger on pull request

The workflow starts when a PR is opened or updated against a protected branch. The test run is part of code review, not a separate calendar event.

2. Install and execute

The runner checks out the repo, installs dependencies, and executes the automated suite with coverage enabled.

3. Upload test artifacts

Coverage and test outputs are stored on the run so there is durable evidence of exactly what happened for that specific change.

4. Comment back to the PR

A machine-generated summary makes the run readable inside the pull request without forcing reviewers to open raw logs first.

.github/workflows/test.yml — sample CI pipeline
# Triggers on pull requests targeting main or develop
on:
  pull_request:
    branches: [main, develop]

permissions:
  pull-requests: write # lets the github-script step comment on the PR

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install dependencies
        run: npm ci

      - name: Run tests with coverage
        run: npm test -- --coverage # enforces 90% gate in jest.config

      - name: Upload test artifacts
        uses: actions/upload-artifact@v4
        with:
          name: test-report-${{ github.sha }}
          path: coverage/

      - name: Comment results on PR
        uses: actions/github-script@v7
        with:
          script: |
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: '✅ Tests passed · Coverage: 92% · [REQ-HT-012] ✓'
            })

The PR becomes the audit-ready package: result summary, requirement tags, and stored artifacts.

The deck’s PR slide makes a narrow but important claim: traceability and evidence should be visible on the change record itself.

Evidence 01: REQ tags are visible by design

Requirement IDs appear in test names, so the relationship between the story and the run output is embedded in the artifact instead of maintained in a separate spreadsheet.

Evidence 02: Artifacts are versioned with the change

Coverage reports, test logs, and packaged outputs are stored as PR-linked artifacts that can be retrieved after merge.

Evidence 03: Failing tests block the merge

The workshop insists on structural enforcement. A feature is not “done” if the tests are red, missing, or below the defined threshold.

Prompt Reference
The deck includes reusable prompts for generating unit tests, integration tests, coverage-improvement passes, and PR-result summaries. The consistent pattern is the same one used everywhere else: identify the feature, specify the expected behavior, include [REQ-ID], and request an explicit output format.
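The pattern can be made concrete with an illustrative prompt (the file path, behavior, and requirement ID here are placeholders, not taken from the deck):

```
Generate Jest unit tests for src/alarms/heartRate.js.
Expected behavior: values above 120 bpm return 'ALARM'; non-numeric input throws.
Tag every test name with [REQ-HT-012].
Output format: a single test file with describe/it blocks and one assertion per case.
```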
Tips & Tricks

Add the requirement tag grep to the CI comment step. A one-liner like grep -ro '\[REQ-[A-Z0-9-]*\]' src/__tests__/ gives you a live traceability report without a separate tool.

Automation is framed as a compliance improvement, not just a speed play.

The GxP slide is one of the deck’s strongest sections because it ties the workflow directly to evidence, traceability, coverage, and change control outcomes.

Before

Evidence and traceability are manually assembled

  • Test evidence depends on screenshots and logs collected by hand.
  • Requirement-to-test mapping lives in separate spreadsheets.
  • Coverage is discovered late, often after code is already written.
  • Change tickets receive evidence manually and sometimes after the fact.
After

Evidence is produced by the workflow itself

  • GitHub Actions emits timestamped test artifacts on every run.
  • REQ tags in test names support automated traceability reports.
  • Coverage thresholds become a PR gate rather than a later discovery.
  • Results are mandatory PR artifacts tied to the code change that produced them.
90% Target Coverage Gate

The deck uses this as the hard example for mature enforcement, with an 80% starting threshold suggested in the rollout plan.

PR as Primary Audit Record

The pull request becomes the place where code, tests, results, and requirement coverage are reviewed together.

Automated Traceability Generation

The deck’s compliance argument depends on eliminating manual mapping work wherever the test metadata can carry that burden directly.

Page takeaway: the PR pipeline is not just a convenience layer. In the workshop model, it is the mechanism that turns AI-generated tests into controlled, reviewable, versioned evidence.